To free us from meaningless tasks
Infinitii ai
2025-04-25
“In his book Slow Productivity, Cal Newport argues that modern work culture prioritizes busyness over effectiveness, leading to stress, shallow work, and burnout.”
Daily routine struggles:
System failures:
```python
# What my notes look like:
notes = [
    {"date": "2025-03-15", "content": "Team mtg - follow up API keys"},
    {"date": "2025-03-17", "content": "idea for deplymnt process"},
    {"date": "2025-03-20", "content": "todo: revew PR for pipeline"},
]

# The problem:
# - Unstructured and disconnected
# - No time for proper review and organization
# - Valuable insights get lost
```

This isn’t just a personal problem. Studies show knowledge workers spend up to 20% of their time searching for information they’ve already seen or created.
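A crude keyword scan over notes like these shows why simple automation falls short: it catches explicit markers ("todo", "follow up") but misses the misspelled "revew" and the implicit idea in the second note entirely, which is what motivates the LLM-based approach later. A minimal sketch (the keyword list is my own illustration):

```python
import re

# Illustrative raw notes, as in the example above
notes = [
    {"date": "2025-03-15", "content": "Team mtg - follow up API keys"},
    {"date": "2025-03-17", "content": "idea for deplymnt process"},
    {"date": "2025-03-20", "content": "todo: revew PR for pipeline"},
]

# Hypothetical action markers; real notes are far less predictable
ACTION_PATTERN = re.compile(r"\b(todo|follow up|review)\b", re.IGNORECASE)

def find_actionable(notes):
    """Return notes whose content contains an explicit action marker."""
    return [n for n in notes if ACTION_PATTERN.search(n["content"])]

actionable = find_actionable(notes)  # finds 2 of 3; the "idea" note slips through
```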
Through AI, we can automate the dull, time-consuming tasks and free up more of our minds’ creative potential.
Slow Productivity AI Architecture
Your Digital Life
The magic happens when these sources are connected through a single semantic layer, allowing cross-referencing and discovery across previously siloed information.
| Component | Role | Slow Productivity Benefit |
|---|---|---|
| Hugging Face Transformers | Local NLP for meaning extraction | Privacy, no context-switching |
| n8n | Self-hosted automation workflows | Control, no third-party dependencies |
| Obsidian + GitHub Issues | Knowledge and task management | Centralized system, reduced tool switching |
| Local LLM | Same engine for all tools | Consistent context across interfaces |
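The “same engine for all tools” row can be realized with a thin shared client around Ollama’s HTTP API; every workflow (note enhancement, task extraction, planning) then talks to one model through one interface. A sketch (the endpoint and payload shape follow Ollama’s documented `/api/generate` interface; the class name is my own):

```python
import json
import urllib.request

class LocalLLMClient:
    """Minimal shared client for Ollama's /api/generate endpoint."""

    def __init__(self, model="llama3:8b", base_url="http://localhost:11434"):
        self.model = model
        self.base_url = base_url

    def build_payload(self, prompt):
        # Non-streaming request payload, per Ollama's generate API
        return {"model": self.model, "prompt": prompt, "stream": False}

    def generate(self, prompt):
        # Blocking call to the local Ollama service
        req = urllib.request.Request(
            f"{self.base_url}/api/generate",
            data=json.dumps(self.build_payload(prompt)).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]
```

Sharing one client instance across tools keeps prompts, model version, and context conventions consistent, which is exactly the benefit the table claims.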
```python
# Core LLM setup
def setup_local_llm():
    """
    Choose a model based on your needs:
    - Llama 3 8B: good balance
    - Mistral 7B: efficient
    - Phi-3 mini: lightweight
    """
    # Install Ollama:
    #   curl -fsSL https://ollama.com/install.sh | sh
    # Pull the model:
    #   ollama pull llama3:8b
    # Start the service:
    #   OLLAMA_HOST=localhost:11434 ollama serve
```

The choice of LLM model is crucial: it affects both performance and quality. For most MacBook Pro users, Llama 3 8B provides the optimal balance between speed and capability.
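One way to make that choice concrete is to pick the largest model that fits comfortably in RAM. The footprints below are rough approximations for quantized builds, and the headroom value is my own assumption, not a benchmark:

```python
# Rough quantized-model memory footprints in GB (approximate, assumed values)
MODEL_FOOTPRINT_GB = {
    "llama3:8b": 5.0,   # good balance
    "mistral:7b": 4.5,  # efficient
    "phi3:mini": 2.5,   # lightweight
}

def pick_model(available_ram_gb, headroom_gb=3.0):
    """Pick the largest model that fits in RAM with headroom for other apps."""
    usable = available_ram_gb - headroom_gb
    candidates = [
        (footprint, name)
        for name, footprint in MODEL_FOOTPRINT_GB.items()
        if footprint <= usable
    ]
    if not candidates:
        return None  # nothing fits; consider a smaller quantization
    return max(candidates)[1]
```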
```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms import Ollama

def process_obsidian_notes():
    # Path to vault
    OBSIDIAN_PATH = "/path/to/obsidian/vault"

    # Load documents
    documents = SimpleDirectoryReader(
        OBSIDIAN_PATH, recursive=True
    ).load_data()

    # Connect to the local LLM
    llm = Ollama(model="llama3:8b", base_url="http://localhost:11434")

    # Create a semantic index over the vault for later querying
    index = VectorStoreIndex.from_documents(documents)

    # Extract tasks from notes with explicit markers
    # (extract_tasks is a helper defined elsewhere)
    tasks = []
    for doc in documents:
        if "TODO" in doc.text or "- [ ]" in doc.text:
            tasks.extend(extract_tasks(doc.text, llm))
    return tasks
```

n8n Workflow for Task Extraction
Workflows run while you sleep, so all your insights are processed and ready by morning.
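The `extract_tasks` helper referenced above is never defined. A simplified, LLM-free stand-in that only catches literal markers might look like the sketch below; the `title`/`source`/`priority` fields match what the GitHub step expects, but the priority heuristic is invented for illustration:

```python
import re

def extract_tasks(text, llm=None, source="obsidian-note"):
    """Pull explicit TODO / checkbox items out of a note.

    A real version would ask the local LLM to infer implicit tasks;
    this fallback only matches literal markers.
    """
    tasks = []
    for line in text.splitlines():
        m = re.match(r"\s*(?:- \[ \]|TODO:?)\s*(.+)", line)
        if m:
            title = m.group(1).strip()
            # Invented heuristic: urgent-sounding items get higher priority
            urgent = re.search(r"\b(urgent|asap|eow)\b", title, re.IGNORECASE)
            priority = "high" if urgent else "medium"
            tasks.append({"title": title, "source": source, "priority": priority})
    return tasks
```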
```python
# GitHub Issues integration
import os

from github import Github

def is_duplicate_task(repo, title):
    # Simple duplicate check: compare against open issue titles
    return any(issue.title == title for issue in repo.get_issues(state="open"))

def create_github_tasks(tasks):
    # Connect to GitHub
    g = Github(os.environ.get("GITHUB_TOKEN"))
    repo = g.get_repo("username/tasks-repo")
    for task in tasks:
        # Avoid duplicates
        if not is_duplicate_task(repo, task["title"]):
            # Create issue
            repo.create_issue(
                title=task["title"],
                body=f"Source: {task['source']}",
                labels=[task["priority"]],
            )
```

Demo Flow Visualization
```
Meeting with team (04/12)
discussed pipeline issues - john mentioned
permissions problem
plan:
- Montreal -> new model (testing and fixing) -> goal by EOW to setup next meeting
- Tesler pipelines -> technical debt due to iandi new deployments and scaling
- IandI in Tesler - weekly dwp
```
| Property | Value |
|---|---|
| para_category | 3-Resources |
| type | note |
| status | active |
| enhanced_date | 2025-04-16 19:19:09 |
| enhanced_method | llm |
- New model development, testing, and fixing. Goal: have everything set up for the next meeting by EOW (end of week).
- Due to recent IandI deployments and scaling, address technical debt in the Tesler pipelines.
- Continue to monitor and manage IandI’s Daily Work Plan (DWP) on a weekly basis.
[[ocm today]]
The transformation happens automatically without you having to change your note-taking behavior. Write naturally, and let AI do the structuring.
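The properties shown above are ordinary YAML frontmatter, so the enhancement step can stamp them onto a note programmatically. A sketch (field names follow the properties table; the function name and defaults are my own):

```python
from datetime import datetime

def stamp_frontmatter(note_body, para_category="3-Resources", note_type="note"):
    """Prepend enhancement metadata as YAML frontmatter (fields per the properties table)."""
    properties = {
        "para_category": para_category,
        "type": note_type,
        "status": "active",
        "enhanced_date": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
        "enhanced_method": "llm",
    }
    lines = ["---"]
    lines += [f"{key}: {value}" for key, value in properties.items()]
    lines += ["---", "", note_body]
    return "\n".join(lines)
```

Because Obsidian reads frontmatter natively, the stamped fields show up as note properties without any plugin-specific machinery.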
```
+------------------+        +----------------------+
|  Obsidian Vault  |        |  GitHub Issues API   |
|  Change Trigger  |        |  (Create Issue)      |
+--------+---------+        +-----------+----------+
         |                              |
         v                              |
+--------+---------+                    |
|  Local LLM Node  |                    |
| (Extract Tasks)  |                    |
+--------+---------+                    |
         |                              |
         v                              |
+--------+---------+                    |
| Task Processing  |                    |
| (Format & Sort)  |                    |
+--------+---------+                    |
         |                              |
         v                              |
+--------+---------+        +-----------+--------------+
| Filter Tasks     +------->+ Create GitHub Issue      |
| (Priority > Med) |        | for Each Extracted Task  |
+------------------+        +--------------------------+
```
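The “Filter Tasks (Priority > Med)” node reduces to a one-line comparison once priorities are ranked. A sketch with an assumed three-level scale (the diagram specifies only “Priority > Med”):

```python
# Assumed priority scale; only "Priority > Med" is given in the diagram
PRIORITY_RANK = {"low": 0, "medium": 1, "high": 2}

def filter_tasks(tasks, threshold="medium"):
    """Keep only tasks strictly above the threshold priority."""
    cutoff = PRIORITY_RANK[threshold]
    return [t for t in tasks if PRIORITY_RANK[t["priority"]] > cutoff]
```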
```markdown
# Daily Focus (2025-04-25)

## Priority Tasks
1. [API-123] Finalize pipeline params
2. [INFRA-45] Review K8s configs
3. [TEAM-12] Prepare sprint planning

## Updates
- PR #234 merged (ML pipeline)
- QA checks passing
- Docs ready for review

## Focus Blocks
09:00-11:00: Deep work - API-123
13:00-14:30: Team meeting
15:00-17:00: Deep work - INFRA-45
```
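A briefing like the one above can be assembled from the filtered issues with plain string formatting; no LLM is needed for the layout itself. A sketch (the structure mirrors the example; the function name and input shapes are my own):

```python
from datetime import date

def render_daily_focus(tasks, updates, blocks, day=None):
    """Render a Daily Focus note in the format shown above."""
    day = day or date.today().isoformat()
    lines = [f"# Daily Focus ({day})", "", "## Priority Tasks"]
    lines += [f"{i}. [{t['id']}] {t['title']}" for i, t in enumerate(tasks, 1)]
    lines += ["", "## Updates"]
    lines += [f"- {u}" for u in updates]
    lines += ["", "## Focus Blocks"]
    lines += [f"{start}-{end}: {label}" for start, end, label in blocks]
    return "\n".join(lines)
```

Scheduling this as the last step of the overnight workflow is what turns extracted tasks into a ready-made plan by morning.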
Research shows that simply having a clear plan reduces decision fatigue by up to 40% and can save 30 minutes of planning time daily.
“The goal isn’t faster productivity, but deeper, more meaningful productivity.”
You don’t need to overhaul your entire workflow at once. Start with one pipeline, prove its value, and gradually expand your system.